# GPU and MIG layout
This page describes the current DGX GPU partitioning used by Slurm.
## Current layout
As of February 21, 2026:
| Resource pool | Quantity | Used by partition(s) |
|---|---|---|
| A100 80GB full GPUs (`nvidia_a100-sxm4-80gb`) | 2 | `prod80` |
| 3g.40gb MIG instances (`nvidia_a100_3g.40gb`) | 2 | `prod40` |
| 1g.10gb MIG instances (`nvidia_a100_1g.10gb`) | 7 | `interactive10`, `prod10` |
| DGX display GPU | 1 | system/display (not a user compute partition) |
## Relation to Slurm partitions
- `interactive10`: one `1g.10gb` MIG per job (interactive usage).
- `prod10`: one `1g.10gb` MIG per batch job.
- `prod40`: one `3g.40gb` MIG per batch job.
- `prod80`: one full A100 80GB GPU per batch job.
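As an illustration of the mapping above, a batch job on `prod40` would request a single `3g.40gb` MIG instance via `--gres`. This is a minimal sketch: the exact GRES string, time limit, and any required account options depend on the local Slurm configuration and are not specified on this page.

```shell
#!/bin/bash
# Hypothetical job script for the prod40 partition (GRES name assumed
# to match the device name shown in the table above; verify locally).
#SBATCH --partition=prod40
#SBATCH --gres=gpu:nvidia_a100_3g.40gb:1
#SBATCH --time=01:00:00

# Inside the job, only the allocated MIG instance should be visible.
nvidia-smi -L
```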
For submission syntax and user workflow, see Slurm (quick guide). For technical policy (defaults/caps), see Advanced partitions.
## How to verify on the node
```shell
# List physical GPUs and MIG instances as seen by the driver
nvidia-smi -L

# Show the GRES and TRES that Slurm has configured for this node
scontrol show node $(hostname -s) | grep -E 'Gres=|CfgTRES|State='
```
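The commands above check the node side; the partition-to-GRES mapping can also be cross-checked from the Slurm side. This uses standard `sinfo` format specifiers (`%P` for partition name, `%G` for generic resources); the exact GRES strings printed depend on the local configuration.

```shell
# One line per partition: partition name followed by its configured GRES
sinfo -o '%P %G'
```

The output should show one GRES type per partition, consistent with the table in "Current layout".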
If GPU partitioning changes, this page and the partition pages must be updated together.